The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice, as well as the bottlenecks faced by the community, in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
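As a hypothetical illustration of the patch-based training strategy reported above (the helper name and sizes are our own, not taken from the survey), a minimal sliding-window patch extraction for samples too large to process at once might look like:

```python
import numpy as np

def extract_patches_2d(image, patch, stride):
    """Collect square patches from a 2D image with a sliding window."""
    h, w = image.shape
    patches = [
        image[y:y + patch, x:x + patch]
        for y in range(0, h - patch + 1, stride)
        for x in range(0, w - patch + 1, stride)
    ]
    return np.stack(patches)

img = np.arange(64, dtype=float).reshape(8, 8)   # a "too large" sample
tiles = extract_patches_2d(img, patch=4, stride=4)
# 4 non-overlapping 4x4 tiles that can be fed to the network one at a time
```

Overlapping patches (stride smaller than the patch size) are a common variant when predictions are later stitched back together.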
Data-Free Class Incremental Learning (DFCIL) aims to sequentially learn tasks with access only to data from the current one. DFCIL is of interest because it mitigates concerns about privacy and long-term storage of data, while at the same time alleviating the problem of catastrophic forgetting in incremental learning. In this work, we introduce robust saliency guidance for DFCIL and propose a new framework, which we call RObust Saliency Supervision (ROSS), for mitigating the negative effect of saliency drift. Firstly, we use a teacher-student architecture leveraging low-level tasks to supervise the model with global saliency. We also apply boundary-guided saliency to protect it from drifting across object boundaries at intermediate layers. Finally, we introduce a module for injecting and recovering saliency noise to increase robustness of saliency preservation. Our experiments demonstrate that our method can retain better saliency maps across tasks and achieve state-of-the-art results on the CIFAR-100, Tiny-ImageNet and ImageNet-Subset DFCIL benchmarks. Code will be made publicly available.
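The abstract does not spell out the saliency-supervision loss; as a minimal sketch of teacher-student saliency distillation, assuming a simple mean-squared penalty on saliency drift (our own simplification, not necessarily the ROSS formulation):

```python
import numpy as np

def saliency_distillation_loss(student_sal, teacher_sal):
    """Penalize saliency drift: mean-squared error between the student's
    saliency map and the frozen teacher's map for the same input."""
    return float(np.mean((student_sal - teacher_sal) ** 2))

teacher = np.full((8, 8), 0.5)   # teacher saliency map
drifted = teacher + 0.1          # student map that has drifted uniformly
loss = saliency_distillation_loss(drifted, teacher)
# loss is mean(0.1**2), i.e. 0.01 up to floating point
```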
Federated learning (FL) enables the building of robust and generalizable AI models by leveraging diverse datasets from multiple collaborators without centralizing the data. We created NVIDIA FLARE as an open-source software development kit (SDK) to make it easier for data scientists to use FL in their research and real-world applications. The SDK includes solutions for state-of-the-art FL algorithms and federated machine learning approaches, which facilitate building workflows for distributed learning across enterprises and enable platform developers to create a secure, privacy-preserving offering for multiparty collaboration utilizing homomorphic encryption or differential privacy. The SDK is a lightweight, flexible, and scalable Python package, and allows researchers to bring their data science workflows implemented in any training libraries (PyTorch, TensorFlow, XGBoost, or even NumPy) and apply them in real-world FL settings. This paper introduces the key design principles of FLARE and illustrates some use cases (e.g., COVID analysis) with customizable FL workflows that implement different privacy-preserving algorithms. Code is available at https://github.com/NVIDIA/NVFlare.
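Setting FLARE's own APIs aside, the core server-side step that such federated workflows orchestrate can be sketched in plain NumPy (a generic FedAvg sketch, not FLARE's actual interface):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side federated averaging: each client's parameters are
    weighted by the size of its local dataset."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum((n / total) * w[i] for w, n in zip(client_weights, client_sizes))
        for i in range(n_layers)
    ]

# two clients sharing a one-layer "model"
w_a = [np.array([1.0, 1.0])]
w_b = [np.array([3.0, 3.0])]
global_w = fedavg([w_a, w_b], client_sizes=[1, 3])
# client b holds 3x the data, so the average is pulled towards it: [2.5, 2.5]
```

In a privacy-preserving deployment, the client updates would additionally be encrypted or noised (e.g., homomorphic encryption or differential privacy) before aggregation.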
In this paper, we propose a novel nonconvex approach to robust principal component analysis (RPCA) for HSI denoising, which focuses on simultaneously producing more accurate approximations of the rank and of the column-wise sparsity for the low-rank and sparse components, respectively. In particular, the new method adopts a log-determinant rank approximation and a novel $\ell_{2,\log}$ norm to restrict the local low-rank and column-wise sparse properties of the component matrices, respectively. For the $\ell_{2,\log}$-regularized shrinkage problem, we develop an efficient closed-form solution, named the $\ell_{2,\log}$-shrinkage operator. The new regularization and the corresponding operator can be used in other problems that require column-wise sparsity. Moreover, we impose a spatial-spectral total variation regularization in the log-based nonconvex RPCA model, which enhances global piecewise smoothness and spectral consistency of the recovered HSI from the spatial and spectral views. Extensive experiments on simulated and real HSIs demonstrate the effectiveness of the proposed method for denoising HSIs.
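The abstract does not give the closed form of the $\ell_{2,\log}$-shrinkage operator; as a rough analogue, the classical column-wise $\ell_{2,1}$ shrinkage (group soft-thresholding), which such operators generalize, can be sketched as follows:

```python
import numpy as np

def group_soft_threshold(Y, lam):
    """Column-wise l2,1 shrinkage: the proximal operator of
    lam * sum_j ||Y[:, j]||_2. Columns whose norm is <= lam are
    zeroed out, which is what enforces column-wise sparsity."""
    norms = np.linalg.norm(Y, axis=0, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return Y * scale

Y = np.array([[3.0, 0.1],
              [4.0, 0.1]])        # column norms: 5.0 and ~0.14
S = group_soft_threshold(Y, lam=1.0)
# first column shrunk to norm 4.0, second column set exactly to zero
```

A log-based regularizer changes the shrinkage rule (larger columns are penalized less), but the column-at-a-time structure of the operator is the same.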
In this paper, we take a data-driven approach and apply machine learning to the moment closure problem for the radiative transfer equation in slab geometry. We propose to directly learn the gradient of the highest-order moment using neural networks. This new approach is consistent with the exact closure we derive in the free-streaming limit and also provides a natural output normalization. A variety of benchmark tests, including the variable-scattering problem, the Gaussian source problem with both periodic and reflecting boundaries, and the two-material problem, show good accuracy and generalizability of our machine-learning closure model.
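As a toy illustration of the learning target (our own construction, not the paper's setup), the gradient of a highest-order moment profile on a 1D slab grid can be obtained by finite differences and used as the network's regression target:

```python
import numpy as np

def gradient_targets(m_N, dx):
    """Central-difference spatial gradient of the highest-order moment,
    used as the regression target for the closure network
    (one-sided differences at the slab boundaries)."""
    return np.gradient(m_N, dx)

x = np.linspace(0.0, 1.0, 101)
m_N = np.sin(2.0 * np.pi * x)             # toy highest-order moment profile
target = gradient_targets(m_N, dx=x[1] - x[0])
# target approximates 2*pi*cos(2*pi*x)
```

Learning the gradient rather than the moment itself keeps the target bounded in the free-streaming limit, which is one motivation for the normalization the abstract mentions.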
The California Innocence Project (CIP), a clinical law school program aiming to free wrongfully convicted prisoners, evaluates thousands of mails containing new requests for assistance and corresponding case files. Processing and interpreting this large amount of information presents a significant challenge for CIP officials, one that can be successfully aided by topic modeling techniques. In this paper, we apply the non-negative matrix factorization (NMF) method and several of its offshoots to a previously unstudied dataset compiled by CIP. We identify the underlying topics of existing case files and classify request files by crime type and case status (decision type). The results uncover the semantic structure of the current case files and can give CIP officials a first impression of newly received case files before further examination. We also present experimental results for popular variants of NMF and explore the benefits and drawbacks of each variant in this real-world application.
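A minimal sketch of basic NMF with Lee-Seung multiplicative updates on a toy term-document matrix (the CIP data and the specific variants studied in the paper are not reproduced here):

```python
import numpy as np

def nmf(V, rank, n_iter=300, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: V ~= W @ H,
    minimizing the Frobenius loss with W and H kept non-negative."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# toy term-document matrix with two clearly separated "topics"
V = np.array([[3., 3., 0., 0.],
              [2., 2., 0., 0.],
              [0., 0., 4., 4.],
              [0., 0., 1., 1.]])
W, H = nmf(V, rank=2)
# W's columns act as topics; H gives each document's topic loadings
```

In the topic-modeling reading, rows of V are terms, columns are documents (case files), and each document is classified by its dominant row of H.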
What is the best way to learn a universal face representation? Recent work on deep learning in the area of face analysis has focused on supervised learning for specific tasks of interest (e.g., face recognition, facial landmark localization, etc.), but has overlooked the overarching question of how to find a facial representation that can be readily adapted to several face analysis tasks and datasets. To this end, we make the following four contributions: (a) we introduce, for the first time, a comprehensive evaluation benchmark for facial representation learning consisting of five important face analysis tasks. (b) We systematically investigate two ways of large-scale representation learning applied to faces: supervised and unsupervised pre-training. Importantly, we focus our evaluations on the case of few-shot facial learning. (c) We investigate important properties of the training datasets, including their size and quality (labelled, unlabelled, or even uncurated). (d) To draw our conclusions, we conduct a very large number of experiments. Our two main findings are: (1) unsupervised pre-training on completely in-the-wild, uncurated data provides consistent and, in some cases, significant accuracy improvements across all face analysis tasks; (2) many existing facial video datasets seem to contain a large amount of redundancy. We will release code and pre-trained models to facilitate future research.
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
A Digital Twin (DT) is a simulation of a physical system that provides information to support decisions that add economic, social, or commercial value. Because the behaviour of a physical system changes over time, a DT must be continually updated with data from the physical system to reflect its changing behaviour. For resource-constrained systems, updating a DT is non-trivial because of challenges such as on-board learning and off-board data transfer. This paper presents a framework for updating data-driven DTs of resource-constrained systems, geared towards system health monitoring. The proposed solution consists of: (1) an on-board system running a lightweight DT that allows the prioritisation and parsimonious transfer of the data generated by the physical system; and (2) off-board robust updating of the DT and detection of anomalous behaviours. Two case studies using a production gas turbine engine system demonstrate the accuracy of the digital representation for real-world, time-varying physical systems.
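The abstract does not specify the anomaly detector; as a hypothetical minimal example of off-board anomaly flagging on transferred sensor data (our own illustration, not the paper's method), a global z-score test might look like:

```python
import numpy as np

def zscore_anomalies(signal, threshold=4.0):
    """Flag samples lying more than `threshold` standard deviations
    from the mean of the transferred signal."""
    mu, sigma = signal.mean(), signal.std()
    return np.abs(signal - mu) > threshold * sigma

rng = np.random.default_rng(0)
vibration = rng.normal(0.0, 1.0, 500)     # nominal sensor behaviour
vibration[100] = 12.0                     # injected fault
flags = zscore_anomalies(vibration)
# the injected fault at index 100 is flagged
```

A deployed health monitor would typically use a model-based residual (physical measurement minus DT prediction) rather than the raw signal, but the thresholding idea is the same.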
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works use separate approaches to handle thing, stuff, and part predictions, without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, named Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased towards PQ. To handle both issues, we make the following contributions: Firstly, we design a meta-architecture that decouples the part features from the things/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. Secondly, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives; it can also decouple the errors of part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and building on our meta-architecture, we propose Panoptic-PartFormer++, which adds a new part-whole interaction scheme based on masked cross attention to further boost part segmentation quality. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with Panoptic-PartFormer, Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost reduction of 70% in GFlops and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
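In the spirit of the masked cross attention mentioned above (a generic sketch, not the paper's exact scheme), each part query can be restricted to attend only within its predicted mask region:

```python
import numpy as np

def masked_cross_attention(Q, K, V, mask):
    """Cross attention in which query i may only attend to key j
    where mask[i, j] is True (logits elsewhere are set to -inf)."""
    d = Q.shape[1]
    logits = Q @ K.T / np.sqrt(d)
    masked = np.where(mask, logits, -np.inf)
    # a query whose mask is entirely empty falls back to full attention
    empty = ~mask.any(axis=1, keepdims=True)
    masked = np.where(empty, logits, masked)
    w = np.exp(masked - masked.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ V

Q = np.zeros((2, 4))                      # two part queries
K = np.zeros((3, 4))                      # three pixel features (keys)
V = np.array([[1., 0.], [0., 1.], [1., 1.]])
mask = np.array([[True, False, False],    # query 0: only pixel 0
                 [True, True, True]])     # query 1: all pixels
out = masked_cross_attention(Q, K, V, mask)
# out[0] equals V[0]; out[1] is the uniform average of V's rows
```

Restricting attention to the foreground of the previous-layer mask prediction is what lets such decoders focus on local part regions instead of the whole image.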